fix(redis): use POST body for setCachedJson to avoid ECONNRESET with large payloads #2524
bighaoZQH wants to merge 3 commits into koala73:main
Conversation
@bighaoZQH is attempting to deploy a commit to the Elie Team on Vercel. A member of the Team first needs to authorize it.
Greptile Summary

This PR fixes a real bug. Key findings:

Confidence Score: 4/5 — Safe to merge after addressing the pipeline timeout; the core fix is correct, but the remaining 1,500 ms timeout may still cause failures for large payloads on slower networks.

The approach is correct and well-motivated. A single P1 remains: the `REDIS_OP_TIMEOUT_MS` (1,500 ms) timeout is inconsistent with every other pipeline call in the file (all use 5,000 ms) and could partially re-introduce timeout failures for the large payloads this PR targets. A one-line fix addresses it. The P2 (missing `resp.ok` guard) is pre-existing behaviour and non-blocking.

- `server/_shared/redis.ts` — the timeout constant on line 94 should be `REDIS_PIPELINE_TIMEOUT_MS`

Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant setCachedJson
    participant RedisRestProxy as Redis REST Proxy (/pipeline)
    Note over Caller,RedisRestProxy: Before (ECONNRESET on large payloads)
    Caller->>setCachedJson: key, value (~126 KB), ttlSeconds
    setCachedJson->>RedisRestProxy: POST /set/{encodeURIComponent(key)}/{encodeURIComponent(value)}/EX/{ttl}
    RedisRestProxy-->>setCachedJson: ECONNRESET / EPIPE (URL > 16 KB)
    Note over Caller,RedisRestProxy: After (body-based pipeline)
    Caller->>setCachedJson: key, value (~126 KB), ttlSeconds
    setCachedJson->>RedisRestProxy: POST /pipeline, body: [[SET, prefixedKey, value, EX, ttl]]
    RedisRestProxy-->>setCachedJson: [{result: OK}]
    setCachedJson-->>Caller: void
```
Reviews (1): Last reviewed commit: "fix(redis): use pipeline body for setCac..."
server/_shared/redis.ts (outdated)

```ts
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(pipeline),
    signal: AbortSignal.timeout(REDIS_OP_TIMEOUT_MS),
```
Wrong timeout for pipeline endpoint
The fix correctly moves large payloads to the request body, but the timeout is still REDIS_OP_TIMEOUT_MS (1,500 ms) — the same budget as the old single-command URL call. Every other pipeline call in this file (getCachedJsonBatch, geoSearchByBox, getHashFieldsBatch, runRedisPipeline) uses REDIS_PIPELINE_TIMEOUT_MS (5,000 ms), which exists precisely because pipelines can be slower, and because large payload transfers take more time over the wire.
For the news:digest:v1 key (~126 KB), transmitting the body to a self-hosted Redis REST proxy within 1,500 ms may still fail on slower or containerised networks, partially re-introducing the timeout failures this PR is meant to fix.
```diff
- signal: AbortSignal.timeout(REDIS_OP_TIMEOUT_MS),
+ signal: AbortSignal.timeout(REDIS_PIPELINE_TIMEOUT_MS),
```
server/_shared/redis.ts (outdated)

```diff
 await fetch(`${url}/pipeline`, {
   method: 'POST',
-  headers: { Authorization: `Bearer ${token}` },
+  headers: {
+    Authorization: `Bearer ${token}`,
+    'Content-Type': 'application/json',
+  },
+  body: JSON.stringify(pipeline),
+  signal: AbortSignal.timeout(REDIS_OP_TIMEOUT_MS),
 });
```
HTTP error response not checked
Every other pipeline function in this file (getCachedJsonBatch, geoSearchByBox, getHashFieldsBatch, runRedisPipeline) checks resp.ok and logs or returns early on non-2xx responses. The new implementation silently swallows a non-2xx status (e.g., a 429 rate-limit or 500 from the proxy), so a persistent write failure won't surface in logs.
Consider adding the same guard the other pipeline helpers use:

```ts
const resp = await fetch(`${url}/pipeline`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(pipeline),
  signal: AbortSignal.timeout(REDIS_PIPELINE_TIMEOUT_MS),
});
if (!resp.ok) {
  console.warn(`[redis] setCachedJson HTTP ${resp.status}`);
}
```

Replace URL-path encoding with a POST `/` body to avoid the Node.js 16KB URL length limit causing ECONNRESET on large payloads. Keeps single-command semantics and REDIS_OP_TIMEOUT_MS — pipeline is not needed here.
@bighaoZQH — good catch and well-documented fix. The root cause analysis is correct and the POST body approach is consistent with the other pipeline helpers. One issue to fix: AbortSignal.timeout uses REDIS_OP_TIMEOUT_MS (1,500 ms), but all other pipeline calls use REDIS_PIPELINE_TIMEOUT_MS (5,000 ms). For the ~126 KB payloads motivating this PR, 1.5 s may cause intermittent timeouts. Please change to REDIS_PIPELINE_TIMEOUT_MS. Once updated, this is ready to merge.
@SebastienMelki Hello, thank you for the review! I initially switched setCachedJson to use the /pipeline endpoint, but after re-reading the code I realized that setCachedJson is designed for single-command semantics — using /pipeline here was unnecessary. The latest commits refine the approach.

Regarding P1: since we're no longer using /pipeline, the timeout inconsistency concern no longer strictly applies. The use of REDIS_PIPELINE_TIMEOUT_MS here is intentional for large payload tolerance, not because this is a pipeline call.

Does this align with your expectation? Ready for re-review.
Why this PR?
Real, well-diagnosed bug. URL-path encoding 126KB payloads against a Node.js HTTP server that has a ~16KB URL limit causes silent ECONNRESET and the key is never written. Moving to POST body is the right fix. The screenshots, root-cause analysis, and before/after logs make this one of the more thorough external contributions we've had.
One thing to fix
Consume the response body after the resp.ok check.
The current code reads the status but leaves the response body undrained:
```ts
if (!resp.ok) {
  console.warn(`[redis] setCachedJson HTTP ${resp.status}`); // no detail
}
// body never read
```

Every other function in redis.ts that checks `resp.ok` also reads the body (see getCachedJsonBatch, runRedisPipeline, etc.). Two problems with the current form:

- Silent Redis errors: Upstash returns `{ error: "ERR ..." }` in the body even on HTTP 200 for Redis-level failures. Without reading the body, a failed write is logged as a success.
- Connection hygiene: In Vercel Edge/Node.js, unconsumed response bodies can delay connection return to the pool.
Suggested fix (matches the established pattern):

```ts
const resp = await fetch(`${url}/`, { ... });
const data = await resp.json().catch(() => null) as { result?: string; error?: string } | null;
if (!resp.ok || data?.error) {
  console.warn(`[redis] setCachedJson failed:`, data?.error ?? `HTTP ${resp.status}`);
}
```

Everything else looks correct

- `JSON.stringify(value)` is explicitly inside the array — the serialization contract matches what `getCachedJson` expects (`JSON.parse(data.result)`). No double-parse risk.
- `REDIS_PIPELINE_TIMEOUT_MS` (5s) is appropriate for large payload writes — good call over the 1.5s op timeout.
- `${url}/` + a 1D array is the correct Upstash single-command endpoint (distinct from `/pipeline` + a 2D array used for batches). Not a consistency issue.
- `String(ttlSeconds)` is correct — the proxy already `.map(String)`s all args.
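The 1D-vs-2D distinction in the list above can be sketched in isolation (a minimal illustration of the Upstash-style REST convention the review describes; the keys and values here are made up, not the repo's actual data):

```typescript
// Single command: POSTed to `${url}/` as one flat array of strings.
const single: string[] = ['SET', 'news:digest:v1', JSON.stringify({ a: 1 }), 'EX', '300'];

// Batch: POSTed to `${url}/pipeline` as an array of commands (2D array).
const pipeline: string[][] = [
  ['SET', 'k1', 'v1'],
  ['GET', 'k2'],
];
```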
@bighaoZQH Please fix the response body gap and this is ready to merge. Thank you
Summary
Error writing data to Redis
Type of change

Affected areas

- Server functions (`/api/*`)

Checklist

- `api/rss-proxy.js` allowlist (if adding feeds)
- Typecheck passes (`npm run typecheck`)

Screenshots
Problem
`setCachedJson` currently encodes the value into the URL path (`POST /set/{encodeURIComponent(key)}/{encodeURIComponent(value)}/EX/{ttl}`). When the payload is large (e.g. `news:digest:v1` is ~126KB), `encodeURIComponent` inflates the data further, exceeding Node.js's default HTTP URL length limit (~16KB). This causes `ECONNRESET`/`EPIPE` errors on every write, so the key is never persisted to Redis. The data falls back to the in-process `fallbackDigestCache` Map instead, which means the cache is lost on every container restart.
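The inflation is easy to see in isolation (a standalone sketch; the payload below is made up, not the real `news:digest:v1` data):

```typescript
// JSON is full of reserved characters ({, }, ", :), and each one becomes a
// three-byte %XX escape under encodeURIComponent, so URL-encoding a JSON
// payload inflates it well beyond its original size.
const value = JSON.stringify({
  items: Array.from({ length: 200 }, (_, i) => ({ id: i, title: `headline "quoted" #${i}` })),
});
const encoded = encodeURIComponent(value);
console.log(value.length, encoded.length, (encoded.length / value.length).toFixed(2));
```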
As shown in the image, I curled a request to the news module, but the logs showed errors related to the Redis link and that the relevant key was missing:


Root Cause
Node.js `http.createServer` rejects requests with URLs exceeding ~16KB, and the redis-rest proxy (`redis-rest-proxy.mjs`) uses Node.js's built-in HTTP server. Large payloads in the URL path are silently dropped.
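A minimal standalone repro of this limit (a sketch, not the project's actual proxy; the path and lengths are arbitrary):

```typescript
import * as http from 'node:http';

// The request line counts toward http.maxHeaderSize (16KB by default), so an
// over-long URL is rejected by Node's HTTP parser before the handler runs.
function probe(pathLength: number): Promise<{ handlerRan: boolean; status?: number; error?: string }> {
  return new Promise((resolve) => {
    let handlerRan = false;
    const server = http.createServer((_req, res) => {
      handlerRan = true;
      res.end('ok');
    });
    server.listen(0, () => {
      const port = (server.address() as { port: number }).port;
      const req = http.request({ port, path: '/set/' + 'x'.repeat(pathLength) }, (res) => {
        res.resume();
        res.on('end', () => {
          server.close();
          resolve({ handlerRan, status: res.statusCode });
        });
      });
      req.on('error', (err: NodeJS.ErrnoException) => {
        server.close();
        resolve({ handlerRan, error: err.code });
      });
      req.end();
    });
  });
}

// A ~20KB path never reaches the handler: Node answers 431 or resets the socket.
probe(20_000).then((r) => console.log(r));
```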
Fix
Switch setCachedJson to use a POST request body instead of URL parameters, consistent with the pattern used by other Redis helpers in this file:
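A sketch of the body-based version (reconstructed from this thread, not the exact repo code: the key-prefixing helper is omitted, the `url`/`token` parameters are made explicit, and the injectable `fetchImpl` parameter is added here only so the sketch is testable):

```typescript
type FetchLike = (input: string, init?: RequestInit) => Promise<Response>;

const REDIS_PIPELINE_TIMEOUT_MS = 5_000;

// Sends one SET as a flat command array in the POST body, so the serialized
// value (possibly >16KB) never appears in the URL.
async function setCachedJson(
  url: string,
  token: string,
  key: string,
  value: unknown,
  ttlSeconds: number,
  fetchImpl: FetchLike = fetch,
): Promise<void> {
  const command = ['SET', key, JSON.stringify(value), 'EX', String(ttlSeconds)];
  const resp = await fetchImpl(`${url}/`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(command),
    signal: AbortSignal.timeout(REDIS_PIPELINE_TIMEOUT_MS),
  });
  // Drain and inspect the body: Upstash-style proxies report Redis-level
  // errors as { error: "..." } even on HTTP 200.
  const data = (await resp.json().catch(() => null)) as { result?: string; error?: string } | null;
  if (!resp.ok || data?.error) {
    console.warn('[redis] setCachedJson failed:', data?.error ?? `HTTP ${resp.status}`);
  }
}
```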
As shown in the images below, I modified the `setCachedJson` method in the `redis.ts` file, then rebuilt the project and repeated the previous steps. The problem is now resolved, and there are no more Redis-related errors in the logs:

![image](attachments/...)
![image](attachments/...)

Impact
- `news:digest:v1` never being persisted to Redis in self-hosted deployments